27 research outputs found
BioSimMER: Virtual Reality Based Experiential Learning
Slides from a presentation given at the UNM Health Sciences Center. UNM Health Sciences Library and Informatics Center.
A computer-based training system combining virtual reality and multimedia
Training new users of complex machines is often an expensive and time-consuming process. This is particularly true for special-purpose systems, such as those frequently encountered in DOE applications. This paper presents a computer-based training system intended as a partial solution to this problem. The system extends the basic virtual reality (VR) training paradigm by adding a multimedia component which may be accessed during interaction with the virtual environment. The 3D model used to create the virtual reality is also used as the primary navigation tool through the associated multimedia. This method exploits the natural mapping between a virtual world and the real world that it represents to provide a more intuitive way for the student to interact with all forms of information about the system.
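The core idea, using the 3D model itself as the index into the multimedia, can be sketched as a lookup from scene-graph node names to linked resources. This is a minimal illustration only; the node names, file paths, and function names below are assumptions, not part of the system described.

```python
# Hedged sketch: selecting an object in the virtual world retrieves the
# multimedia items linked to that object's node in the 3D model.
# All names and paths here are illustrative assumptions.

multimedia_index = {
    "control_panel": ["video/panel_startup.mp4", "text/panel_manual.html"],
    "coolant_valve": ["audio/valve_warning.wav"],
}

def on_object_selected(node_name, index=multimedia_index):
    """Return the multimedia items linked to a selected 3D model node."""
    return index.get(node_name, [])
```

Because the same model drives both the VR interaction and the multimedia navigation, the student never has to leave the virtual environment to find documentation about a part they are looking at.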
An introductory VR course for undergraduates incorporating foundation, experience and capstone
This paper presents the structure, pedagogy and motivation for an introductory undergraduate course in Virtual Reality. The course is offered as an elective at the 400-level, hence students taking the course are juniors and seniors who have completed a substantial portion of their Computer Science curriculum. The course incorporates multiple components of VR theory and practice, including hardware and software survey and analysis, human perception, and applications. It also contains a semester-long, hands-on development component utilizing a specific virtual reality environment. In addition, because VR is a broad, multidisciplinary field of study, the course provides an ideal environment for incorporating capstone elements that allow undergraduate students to tie together many of the computing principles learned during their undergraduate academic careers. Copyright 2005 ACM
Visually-guided haptic object recognition
Sensory capabilities are vital if a robot is to function autonomously in unknown or partially specified environments, if it is to carry out complex, roughly detailed tasks, and if it is to interact with and to learn from the world around it. Perception forms the all-important interface between the cogitative organism and the world in which it must act and survive. Hence the first step toward intelligent, autonomous robots is to develop this interface--to provide robots with perceptual capabilities. This work presents a model for robotic perception. Within the framework of this model, we have developed a system which utilizes passive vision and active touch for the task of object categorization. The system is organized as a highly modularized, distributed hierarchy of domain-specific and informationally encapsulated knowledge-based experts. The visual subsystem is passive and consists of a two-dimensional region analysis and a three-dimensional edge analysis. The haptic subsystem is active and consists of a set of modules which either execute exploratory procedures to extract information from the world or which combine information from lower-level modules into more complex representations. We also address the issues of visually-guided haptic exploration and intersensory integration. Finally, we establish representational and reasoning paradigms for dealing with generic objects. Both representation and reasoning are feature-based. The representation includes both definitional information in the form of a hierarchy of frames and spatial/geometric information in the form of the spatial polyhedron.
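The feature-based reasoning described above can be sketched as scoring observed features against a set of object frames. The frames, feature names, and scoring rule below are illustrative assumptions, not the dissertation's actual representation.

```python
# Hedged sketch of feature-based categorization against a small set of
# generic object frames; features and categories are assumptions.

object_frames = {
    "cup":    {"hollow": True,  "handle": True,  "rigid": True},
    "sponge": {"hollow": False, "handle": False, "rigid": False},
}

def categorize(observed, frames=object_frames):
    """Return the category whose frame best matches the observed features."""
    def score(frame):
        # Count how many frame slots agree with the observed features.
        return sum(observed.get(k) == v for k, v in frame.items())
    return max(frames, key=lambda name: score(frames[name]))
```

In the system described, such frames would be filled by the visual and haptic modules (e.g. an exploratory procedure establishing rigidity), with the hierarchy allowing partial matches against more generic frames.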
VR/IS Lab Virtual Actor Research Overview
This overview presents current research at Sandia National Laboratories in the Virtual Reality and Intelligent Simulation Lab. Into an existing distributed VR environment which we have been developing, and which provides shared immersion for multiple users, we are adding virtual actor support. The virtual actor support we are adding to this environment is intended to provide semi-autonomous actors, with oversight and high-level guiding control by a director/user, and to allow the overall action to be driven by a scenario. We present an overview of the environment into which our virtual actors will be added in Section 3, and discuss the direction of the Virtual Actor research itself in Section 4. We briefly review related work in Section 2. First, however, we need to place the research in the context of what motivates it. The motivation for our construction of this environment, and the line of research associated with it, is based on a long-term program of providing support, through simulation, for situational training, by which we mean a type of training in which students learn to handle multiple situations or scenarios. In these situations, the student may encounter events ranging from the routine occurrence to the rare emergency. Indeed, the appeal of such training systems is that they could allow the student to experience and develop effective responses for situations they would otherwise have no opportunity to practice, until they happened to encounter an actual occurrence. Examples of the type of students for this kind of training would be security forces or emergency response forces. An example of the type of training scenario we would like to support is given in Section 4.2.
WeeBot: A novel method for infant control of a robotic mobility device
A novel method for controlling a robotic mobility platform, the WeeBot, is presented. The WeeBot permits an infant seated on the robot to control its motion by leaning in the direction of desired movement. The WeeBot hardware and software are discussed and the results of a pilot feasibility study are presented. This study shows that after five training sessions typically developing infants ages six to nine months were able to demonstrate directed movement of the WeeBot. © 2012 IEEE
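The lean-to-drive control described above can be sketched as reading seat sensors, computing a lean direction, and mapping it to a velocity command with a dead zone so the robot ignores normal postural sway. The four-load-cell layout, thresholds, and function names below are assumptions for illustration, not the WeeBot's actual implementation.

```python
# Hedged sketch of lean-based drive control, assuming four load cells
# under the seat (front, back, left, right). Names are hypothetical.

def lean_vector(front, back, left, right):
    """Return an (x, y) lean direction from four seat load-cell readings."""
    total = front + back + left + right
    if total == 0:
        return (0.0, 0.0)
    x = (right - left) / total   # positive = leaning right
    y = (front - back) / total   # positive = leaning forward
    return (x, y)

def drive_command(front, back, left, right, threshold=0.1, max_speed=0.3):
    """Map lean to a (vx, vy) velocity command, with a dead zone."""
    x, y = lean_vector(front, back, left, right)
    mag = (x * x + y * y) ** 0.5
    if mag < threshold:
        return (0.0, 0.0)        # small leans ignored: robot stays still
    return (max_speed * x, max_speed * y)
```

The dead zone matters for infant users: it separates intentional leaning toward a goal from the incidental movement of a seated six-to-nine-month-old, which is exactly the distinction the training sessions in the pilot study help the infant learn.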
Emotional and performance attributes of a VR game: A study of children
In this paper we present the results of a study to determine the effect and efficacy of a Virtual Reality game designed to elicit movements of the upper extremity. The study is part of an ongoing research effort to explore the use of Virtual Reality as a means of improving the effectiveness of therapy for children with motor impairments. The current study addresses the following questions: 1. Does a VR game requiring repetitive motion sufficiently engage a child? 2. Are there detrimental physiological or sensory side-effects when a child uses an HMD-based VR? 3. Are the movements produced by a child while playing a VR game comparable to movements produced when carrying out a similar task in the real world? Based on study results, the enjoyment level for the game was high. ANOVA performed on the results for physical well-being pre- and post-VR showed no overall ill-effects as perceived by the children. Playing the game did not affect proprioception based on pre- and post-VR test scores. Motion data show similar, but not identical, overall movement profiles for similar tasks performed in the real and virtual worlds. Motor learning occurs in both environments, as measured by time to complete a game cycle.